Robust multi-modal and multi-unit feature level fusion of face and iris biometrics
Multi-biometrics has recently emerged as a means of more robust and efficient
personal verification and identification. By exploiting information from multiple
sources at various levels, i.e., feature, score, rank, or decision, the false acceptance
and rejection rates can be considerably reduced. Among these levels, feature level fusion
is a relatively understudied problem. This paper addresses feature level
fusion for multi-modal and multi-unit sources of information. For multi-modal
fusion the face and iris biometric traits are considered, while the multi-unit fusion
is applied to merge the data from the left and right iris images. The proposed
approach computes SIFT features from both biometric sources, whether multi-modal
or multi-unit. For each source, the extracted SIFT features are selected via
spatial sampling, and the selected features are then concatenated into a single
feature super-vector using serial fusion. This concatenated feature
vector is used to perform classification.
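The pipeline above (per-source feature selection, then serial concatenation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the "SIFT" descriptors are random stand-ins, the grid-based selection is only one plausible form of spatial sampling, and the function names `spatial_sample` and `serial_fusion` are hypothetical.

```python
import numpy as np

def spatial_sample(keypoints, descriptors, grid=(4, 4), img_size=(128, 128)):
    """Keep at most one descriptor per grid cell (a simple spatial sampling).

    keypoints: (N, 2) array of (x, y) locations; descriptors: (N, 128)
    SIFT-like vectors, as a detector such as OpenCV's SIFT would produce.
    """
    cell_w = img_size[0] / grid[0]
    cell_h = img_size[1] / grid[1]
    selected, seen_cells = [], set()
    for (x, y), d in zip(keypoints, descriptors):
        cell = (int(x // cell_w), int(y // cell_h))
        if cell not in seen_cells:   # keep the first keypoint seen in each cell
            seen_cells.add(cell)
            selected.append(d)
    return np.array(selected)

def serial_fusion(feats_a, feats_b):
    """Concatenate two selected feature sets into one 1-D super-vector."""
    return np.concatenate([feats_a.ravel(), feats_b.ravel()])

# Toy example with random "SIFT" descriptors for a face and an iris image
rng = np.random.default_rng(0)
kp_face, d_face = rng.uniform(0, 128, size=(50, 2)), rng.normal(size=(50, 128))
kp_iris, d_iris = rng.uniform(0, 128, size=(40, 2)), rng.normal(size=(40, 128))

face_sel = spatial_sample(kp_face, d_face)
iris_sel = spatial_sample(kp_iris, d_iris)
super_vec = serial_fusion(face_sel, iris_sel)
print(super_vec.shape)  # one 1-D super-vector fed to the classifier
```

The same two helpers cover both fusion settings in the abstract: pass face and iris features for the multi-modal case, or left- and right-iris features for the multi-unit case.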
Experimental results on standard face and iris biometric databases are
presented. The reported results clearly show the classification performance
improvements obtained by applying feature level fusion for both multi-modal and
multi-unit biometrics, in comparison to uni-modal classification and score level
fusion.
Feature Level Fusion of Face and Fingerprint Biometrics
The aim of this paper is to study the fusion at feature extraction level for
face and fingerprint biometrics. The proposed approach is based on the fusion
of the two traits by extracting independent feature pointsets from the two
modalities, and making the two pointsets compatible for concatenation.
Moreover, to handle the curse of dimensionality, the feature
pointsets are properly reduced in dimension. Different feature reduction
techniques are implemented, both before and after the feature pointsets fusion, and
the results are duly recorded. The fused feature pointsets for the database and
the query face and fingerprint images are matched using techniques based on
either point pattern matching or Delaunay triangulation. Comparative
experiments are conducted on chimeric and real databases to assess the actual
advantage of fusion performed at the feature extraction level, in
comparison to the matching score level.
Comment: 6 pages, 7 figures, conference
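The reduce-before-fusion versus reduce-after-fusion choice described in this abstract can be sketched with PCA as a stand-in reduction technique (the paper implements several; PCA here is only an illustrative assumption, with a hypothetical helper `pca_reduce` and random stand-in features).

```python
import numpy as np

def pca_reduce(X, k):
    """Project the row vectors of X onto their top-k principal directions."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; the rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
face_feats = rng.normal(size=(20, 64))  # 20 samples of 64-D face features
fp_feats = rng.normal(size=(20, 48))    # 20 samples of 48-D fingerprint features

# Reduction *before* fusion: reduce each modality, then concatenate
fused_before = np.hstack([pca_reduce(face_feats, 8), pca_reduce(fp_feats, 8)])

# Reduction *after* fusion: concatenate first, then reduce the joint vectors
fused_after = pca_reduce(np.hstack([face_feats, fp_feats]), 16)

print(fused_before.shape, fused_after.shape)  # both (20, 16)
```

Both orderings yield fused vectors of the same dimension, but reducing after fusion lets the reduction exploit cross-modality correlations, whereas reducing before fusion keeps each modality's subspace independent.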
MIS-AVoiDD: Modality Invariant and Specific Representation for Audio-Visual Deepfake Detection
Deepfakes are synthetic media generated using deep generative algorithms and
have posed a severe societal and political threat. Apart from facial
manipulation and synthetic voice, a novel kind of deepfake has recently
emerged in which either the audio or the visual modality is manipulated. In this regard, a
new generation of multimodal audio-visual deepfake detectors is being
investigated to collectively focus on audio and visual data for multimodal
manipulation detection. Existing multimodal (audio-visual) deepfake detectors
are often based on the fusion of the audio and visual streams from the video.
Existing studies suggest that these multimodal detectors often perform only
on par with unimodal audio and visual deepfake detectors. We
conjecture that the heterogeneous nature of the audio and visual signals
creates distributional modality gaps and poses a significant challenge to
effective fusion and efficient performance. In this paper, we tackle the
problem at the representation level to aid the fusion of audio and visual
streams for multimodal deepfake detection. Specifically, we propose the joint
use of modality (audio and visual) invariant and specific representations. This
ensures that both the patterns common across modalities and the patterns
specific to each modality, characterizing pristine or fake content, are
preserved and fused for multimodal deepfake manipulation detection. Our
experimental results on FakeAVCeleb and
KoDF audio-visual deepfake datasets suggest the enhanced accuracy of our
proposed method over SOTA unimodal and multimodal audio-visual deepfake
detectors by % and %, respectively, thus obtaining state-of-the-art
performance.
Comment: 8 pages, 3 figures
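The core idea, keeping a shared (modality-invariant) embedding alongside per-modality (specific) embeddings and fusing all of them, can be sketched as follows. This is a minimal linear-projection sketch, not the MIS-AVoiDD architecture: the projection matrices, dimensions, and the function `represent` are all hypothetical, and the alignment loss mentioned in the comments is only indicated, not trained.

```python
import numpy as np

rng = np.random.default_rng(2)
d_a, d_v, d_emb = 40, 64, 16   # audio dim, visual dim, embedding dim

# Hypothetical learned projections: one map per modality into a *common*
# (invariant) space, plus a modality-*specific* map for each stream.
W_inv_a = rng.normal(size=(d_a, d_emb)) * 0.1
W_inv_v = rng.normal(size=(d_v, d_emb)) * 0.1
W_spec_a = rng.normal(size=(d_a, d_emb)) * 0.1
W_spec_v = rng.normal(size=(d_v, d_emb)) * 0.1

def represent(audio, visual):
    """Build invariant and specific embeddings, then fuse them."""
    inv_a, inv_v = audio @ W_inv_a, visual @ W_inv_v      # common patterns
    spec_a, spec_v = audio @ W_spec_a, visual @ W_spec_v  # per-modality cues
    # During training, an alignment loss (e.g. minimizing ||inv_a - inv_v||)
    # would pull the invariant embeddings together; here we only fuse.
    return np.concatenate([inv_a, inv_v, spec_a, spec_v], axis=-1)

audio = rng.normal(size=(8, d_a))    # batch of 8 audio feature vectors
visual = rng.normal(size=(8, d_v))   # matching batch of visual features
fused = represent(audio, visual)
print(fused.shape)  # (8, 64): input to the real/fake classifier head
```

Keeping the specific embeddings alongside the invariant ones is what lets the fused vector retain manipulation cues that live in only one modality, which a purely shared representation could wash out.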